Mode effect

From Wikipedia, the free encyclopedia

Mode effect is a broad term referring to a phenomenon where a particular survey administration mode causes different data to be collected. For example, when asking a question using two different modes (e.g. paper and telephone), responses to one mode may be significantly and substantially different from responses given in the other mode. Mode effects are a methodological artifact, limiting the ability to compare results from different modes of collection.

Theory

Particular survey modes put respondents into different frames of mind, referred to as a mental "script".[1] This can affect the results they give. For example:

  • Face-to-face surveys prompt a "guest" script. Respondents are more likely to treat face-to-face interviewers graciously and hospitably, leading them to be more agreeable and affecting their answers. Differences between the interviewers administering the survey can also lead to a range of "interviewer effects" on survey results.
  • Phone interviews prompt a "solicitor" or "telemarketer" script. Respondents may place less priority on telephone interviews, making them more likely to satisfice (answer questions with the least possible effort) in order to finish the interview sooner. Wariness of who may be on the other end of the phone can also lead respondents to provide more socially acceptable answers than would be given in other survey modes.

Mode effects are likely to be larger when the differences between modes are larger.[citation needed] Face-to-face interviews are substantially different from self-completed pen-and-paper forms. By contrast, web surveys, pen-and-paper forms and other self-completed formats are quite similar (each requiring respondents to read and privately respond to a question), and therefore mode effects between them may be minimised.

Users of surveys must consider the potential for mode effects when comparing results from studies in different modes. However, this is difficult, as mode effects can be complex and subject to interactions between respondent demographics, subject matter and mode. Unless mode effects are formally investigated for the survey instrument, it is difficult to quantify their size, and qualitative judgments by experts familiar with the subject matter and respective modes are required instead.

Social desirability bias

Studies of mode effects are sometimes contradictory, but some general patterns do emerge. For example, social desirability bias tends to be highest for telephone surveys and lowest for web surveys. Ranked from most to least susceptible, the modes are:[2][3]

  1. Telephone surveys
  2. Face-to-face surveys
  3. Interactive voice response (IVR) surveys
  4. Mail surveys
  5. Web surveys

Therefore, as the data collected on sensitive topics (such as sexual behaviour or illicit activities) will vary depending on the administration mode, researchers should be cautious when combining data or comparing results from different modes.

Differences in questions between modes

Some modes require different question wording from others, in order to suit the features of the mode. For example, self-complete forms can use lists of examples or extensive instructions to help respondents answer relatively complex questions. By contrast, in telephone interviews, respondents are limited by their working memory and are unlikely to understand a long question with multiple sub-clauses. Another example is that a 'matrix' of questions, commonly found on self-complete forms, cannot be read out easily in a verbal interview; instead, a matrix generally needs to be scripted as a series of individual questions.

Differences in question wording across modes may cause different data to be collected by different modes. However, this is not always the case, and appropriate adaptation of questions to a new mode can yield comparable data.[citation needed] Survey designers should consider the conventions of the mode when adapting questions. For example, while it may be acceptable to require respondents to calculate total figures themselves on a paper form, respondents may perceive this as burdensome in a web form (where respondents might expect totals to be calculated automatically by the computer). This may in turn change their attitude toward the form, altering their behaviour and ultimately changing the data collected.

Identifying and resolving mode effects

Mode effects can be identified by embedding an experiment within the survey, in which a proportion of respondents is randomly allocated to each mode. Differences in the results from each mode then estimate the 'mode effect' for that particular survey.
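As an illustrative sketch (not drawn from this article's sources), the results of such an embedded experiment can be compared with a simple two-proportion z-test between the randomly allocated mode groups; all counts below are hypothetical:

```python
import math

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """z statistic for the difference in 'yes' rates between two modes."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    # Pooled rate under the null hypothesis of no mode effect
    p_pool = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical embedded experiment: 400 respondents randomly allocated to
# telephone and 400 to web, all asked the same sensitive yes/no question.
z = two_proportion_z(yes_a=180, n_a=400, yes_b=140, n_b=400)
print(round(z, 2))  # |z| > 1.96 would suggest a mode effect at the 5% level
```

Note that this only tests the marginal difference between modes; in practice, analyses of mode experiments often also control for respondent demographics, since mode effects can interact with respondent type.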

Once a mode effect has been quantified, it may be possible to use this information to reprocess existing data and allow comparison between data collected in different modes (e.g. by backcasting a time series to determine what past results 'would have' been had they been administered in the new mode).
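A minimal sketch of such a backcast, assuming a purely hypothetical adjustment ratio estimated from an embedded experiment, might look like:

```python
# Hypothetical scenario: a survey moves from telephone to web collection,
# and an embedded experiment estimated web responses at 0.90x the
# telephone level (an assumed figure, for illustration only).
MODE_ADJUSTMENT = 0.90  # assumed ratio: web estimate / telephone estimate

telephone_series = {2018: 42.0, 2019: 44.0, 2020: 43.0}  # illustrative rates (%)

# Backcast: restate the old telephone series on the new web-mode basis,
# so past and future results are comparable.
backcast = {year: round(rate * MODE_ADJUSTMENT, 1)
            for year, rate in telephone_series.items()}
print(backcast)
```

A constant multiplicative ratio is the simplest possible model; if the mode effect interacts with respondent demographics or subject matter, a single ratio will not restate the series accurately.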

Differential coverage between modes

Different administration modes may inherently exclude some parts of the target population. This potentially biases the sample that is taken, changing the data from what would have been collected using another mode. For example, people without a home phone are excluded from Random Digit Dialling (RDD) surveys, and people without internet access are unlikely to complete a web survey. This means different samples are taken from the population when different modes are used. Unless experiments are specifically designed to investigate differential coverage, mode effects will be confounded with coverage,[4] and significant differences between modes/experimental conditions could have several explanations:

  • properties of the mode;
  • different 'types' of people responding to the different modes;
  • both the mode properties and different 'types' of respondents (in an additive fashion);
  • an interaction, where some respondents are affected by properties of the mode but others are not.

This problem is exacerbated when multiple modes are used in 'live' administration of a survey. Some surveys offer multiple modes, allowing respondents to choose the method most convenient for them; that is, different 'types' of respondents are expected to complete different modes based on their own choices. In this case, mode effects are difficult to quantify. Randomly allocating respondents to a condition does not reflect their preference, so such an experiment lacks external validity and its results would not directly generalise to situations offering respondents a choice. Conversely, failing to randomly allocate participants (i.e. allowing them a choice, thereby retaining external validity) would mean apparent differences between modes reflect the combined effect of (a) different respondent types choosing each mode and (b) any mode effects.

References

  1. ^ Groves, Robert M. (1989). Survey Errors and Survey Costs. New York: Wiley-Interscience.
  2. ^ Kreuter, Frauke; Presser, Stanley; Tourangeau, Roger (2008). "Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity". Public Opinion Quarterly 72 (5): 847–865. doi:10.1093/poq/nfn063.
  3. ^ Holbrook, Allyson L.; Green, Melanie C.; Krosnick, Jon A. (2003). "Telephone versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias". Public Opinion Quarterly 67 (1): 79–125. doi:10.1086/346010.
  4. ^ de Leeuw, E. (2005). "To Mix or Not to Mix Data Collection Modes in Surveys". Journal of Official Statistics 21 (2): 233–255.